%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2020/09.30.13.05
%2 sid.inpe.br/sibgrapi/2020/09.30.13.05.44
%@doi 10.1109/SIBGRAPI51738.2020.00009
%T Why are Generative Adversarial Networks so Fascinating and Annoying?
%D 2020
%A Faria, Fabio Augusto,
%A Carneiro, Gustavo,
%@affiliation Universidade Federal de São Paulo
%@affiliation The University of Adelaide
%E Musse, Soraia Raupp,
%E Cesar Junior, Roberto Marcondes,
%E Pelechano, Nuria,
%E Wang, Zhangyang (Atlas),
%B Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)
%C Porto de Galinhas (virtual)
%8 7-10 Nov. 2020
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K GAN, machine learning, computer vision, deep learning.
%X This paper focuses on one of the most fascinating and successful, yet challenging, generative models in the literature: the Generative Adversarial Network (GAN). Recently, GANs have attracted much attention from the scientific community and the entertainment industry due to their effectiveness in generating complex, high-dimensional data, which makes them superior to other types of generative models for producing new samples. The traditional GAN (referred to as the Vanilla GAN) is composed of two neural networks, a generator and a discriminator, which are trained through a minimax optimization. The generator creates samples to fool the discriminator, which in turn tries to distinguish between original and generated samples. This optimization aims to train a model that can generate samples from the training set distribution. In addition to defining and explaining the Vanilla GAN and its main variations (e.g., DCGAN, WGAN, and SAGAN), this paper presents several applications that make GANs an extremely exciting method for the entertainment industry (e.g., style transfer and image-to-image translation). Finally, the following measures for assessing the quality of generated images are presented: the Inception Score (IS) and the Fréchet Inception Distance (FID).
%@language en
%3 GAN_Tutorial_SIBGRAPI2020.pdf